Ignored Module tests: provide required input tensor #19028
psiddh merged 3 commits into pytorch:main from
Conversation
All four tests failed because they called forward() with zero arguments on mobilenet_v2, which expects a [1, 3, 224, 224] float input. This was a test bug, not a runtime bug. Add a dummyInput() helper that creates an all-ones tensor (Tensor.ones) with the correct shape, and remove the @Ignore annotations. This commit was authored with the help of Claude.
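A minimal sketch of what such a helper can look like, assuming the Tensor.fromBlob(FloatArray, LongArray) factory of the ExecuTorch Android bindings (the PR's actual dummyInput() implementation may differ):

```kotlin
import org.pytorch.executorch.Tensor

// Hypothetical helper; Tensor.fromBlob is assumed from the ExecuTorch
// Android bindings, not taken verbatim from this PR.
private fun dummyInput(): Tensor {
    val shape = longArrayOf(1, 3, 224, 224)   // NCHW input mobilenet_v2 expects
    val numel = 1 * 3 * 224 * 224             // 150_528 elements
    // All-ones data, mirroring the "Tensor.ones" the description refers to.
    return Tensor.fromBlob(FloatArray(numel) { 1.0f }, shape)
}
```

The tests can then call module.forward(EValue.from(dummyInput())) instead of forward() with no arguments.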
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19028
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV — There is 1 currently active SEV. If your PR is affected, please view them below.
❌ 1 New Failure, 43 Pending, 1 Unrelated Failure — As of commit b3b7564 with merge base 66e4656.
NEW FAILURE - The following job has failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes. |
This PR needs a
Pull request overview
Fixes previously ignored Android instrumentation tests for Module by providing the required MobilenetV2 input tensor so forward()/execute() are invoked with valid arguments.
Changes:
- Remove @Ignore annotations from four instrumentation tests.
- Add a dummyInput() helper that constructs a [1, 3, 224, 224] float tensor and pass it into forward()/execute().
- Update the multi-threaded forward test to use the dummy input.
```kotlin
val results = module.forward(EValue.from(dummyInput()))
Assert.assertTrue(results[0].isTensor)
```
In the multi-thread test each worker allocates a new 1x3x224x224 float tensor via dummyInput(). With 100 threads this can create significant short-lived allocations (~60MB+), which can make the instrumentation test flaky on memory-constrained devices. Consider pre-creating a single Tensor (or even EValue) and reusing it across threads/tests to avoid repeated allocations.
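A hedged sketch of the suggested reuse pattern, with the reviewer's allocation math spelled out. Tensor.fromBlob, the loaded `module`, and the surrounding test scaffolding are assumptions for illustration, not code from this PR:

```kotlin
import org.pytorch.executorch.EValue
import org.pytorch.executorch.Tensor
import kotlin.concurrent.thread

// Pre-create one input up front and share it read-only across all workers,
// instead of calling dummyInput() inside every thread.
val shape = longArrayOf(1, 3, 224, 224)
val sharedInput = EValue.from(
    Tensor.fromBlob(FloatArray(1 * 3 * 224 * 224) { 1.0f }, shape)
)

// Cost of the original per-thread allocation: 1*3*224*224 = 150_528 floats,
// 150_528 * 4 bytes ≈ 0.6 MB, so 100 threads ≈ 57 MB of short-lived buffers.
val workers = (1..100).map {
    thread {
        val results = module.forward(sharedInput)  // no per-thread tensor allocation
        Assert.assertTrue(results[0].isTensor)
    }
}
workers.forEach { it.join() }
```

Sharing one immutable input this way trades a small amount of setup code for a roughly 100x reduction in transient allocations during the test.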
@claude review this PR
Claude finished @JacobSzwejbka's task in 52s — Review of PR #19028
The fix is correct. The 4 tests were
Minor items:
Verdict: This is a clean, minimal fix. LGTM.
@claude explain the ci failures on this PR
Claude finished @JacobSzwejbka's task in 1m 47s
CI Failure Analysis
This PR only modifies a single file (
1. New Failure:
Co-authored-by: Copilot <[email protected]>
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 4 comments.
This commit was authored with the help of Claude.
## Summary

Reverts the following Android PRs:
- #19099 — Android: consistent error types across all modules
- #19124 — Android: Module implements Closeable
- #19092 — Android: improve error diagnostics for LlmModule and exceptions
- #19028 — Ignored Module tests: provide required input tensor

Authored with Claude.